Drug repurposing can accelerate the identification of effective compounds for clinical use against SARS-CoV-2, with the advantages of pre-existing clinical safety data and an established supply chain. RNA viruses such as SARS-CoV-2 manipulate cellular pathways and induce reorganization of subcellular structures to support their life cycle. These morphological changes can be quantified using bioimaging techniques. In this work, we developed DEEMD: a computational pipeline using deep neural network models within a multiple instance learning framework, to identify putative treatments effective against SARS-CoV-2 based on morphological analysis of the publicly available RxRx19a dataset. This dataset consists of fluorescence microscopy images of SARS-CoV-2 non-infected cells and infected cells, with and without drug treatment. DEEMD first extracts discriminative morphological features to generate cell morphological profiles from the non-infected and infected cells. These morphological profiles are then used in a statistical model to estimate the efficacy of an applied treatment on infected cells, based on their similarity to non-infected cells. DEEMD is able to localize infected cells via weak supervision, without any expensive pixel-level annotations. DEEMD identifies known SARS-CoV-2 inhibitors, such as remdesivir and aloxistatin, supporting the validity of our approach. DEEMD can be explored on other emerging viruses and datasets to rapidly identify candidate antiviral treatments in the future. Our implementation is available online at https://www.github.com/sadegh-saberian/deemd
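As a rough illustration of the efficacy-estimation step, a treatment can be scored by how closely the morphological embeddings of treated infected cells resemble the uninfected-cell profile. The sketch below is a hypothetical simplification, not DEEMD's actual statistical model: `efficacy_score`, the cosine-similarity choice, and the random embeddings are all assumptions.

```python
# Hypothetical sketch of similarity-based efficacy scoring (not DEEMD's code).
import numpy as np

def efficacy_score(treated_embeddings: np.ndarray,
                   uninfected_embeddings: np.ndarray) -> float:
    """Mean cosine similarity between treated-cell embeddings and the
    centroid of the uninfected-cell morphological profile."""
    centroid = uninfected_embeddings.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    treated = treated_embeddings / np.linalg.norm(
        treated_embeddings, axis=1, keepdims=True)
    return float((treated @ centroid).mean())

# Made-up embeddings just to show the call; real ones would come from the CNN.
rng = np.random.default_rng(0)
uninfected = rng.normal(size=(100, 128))
treated = rng.normal(size=(50, 128))
print(efficacy_score(treated, uninfected))
```

Ranking treatments by this score would then surface compounds whose treated cells look most like uninfected ones.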
This technical report describes MARIO, a modular and scalable architecture for computing vision statistics in the RoboCup SPL, presented during the SPL Open Research Challenge at RoboCup 2022, held in Bangkok (Thailand). MARIO is an open-source, ready-to-use software application whose ultimate goal is to contribute to the growth of the RoboCup SPL community. MARIO comes with a GUI that integrates multiple machine learning and computer vision based features, including automatic camera calibration, background subtraction, homography computation, player + ball tracking and localization, NAO robot pose estimation, and fall detection. MARIO was ranked first in the Open Research Challenge.
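One of the listed components, homography computation, can be sketched with standard OpenCV calls: estimate the image-to-field mapping from known pitch landmarks, then project a ball detection into field coordinates. This is a hedged illustration, not MARIO's code; the landmark coordinates below are made up.

```python
# Illustrative camera-to-field homography, in the spirit of MARIO's pipeline.
import cv2
import numpy as np

# Pixel coordinates of four field landmarks (values invented for this sketch)...
image_pts = np.array([[120, 400], [520, 390], [480, 220], [160, 230]],
                     dtype=np.float32)
# ...and their known positions on the field plane, in meters.
field_pts = np.array([[-2.0, 1.5], [2.0, 1.5], [2.0, -1.5], [-2.0, -1.5]],
                     dtype=np.float32)

H, _ = cv2.findHomography(image_pts, field_pts)

# Project a detected ball center (pixels) onto the field plane (meters).
ball_px = np.array([[[300.0, 310.0]]], dtype=np.float32)
ball_field = cv2.perspectiveTransform(ball_px, H)[0, 0]
print(f"ball at x={ball_field[0]:.2f} m, y={ball_field[1]:.2f} m")
```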
Animals are able to rapidly infer, from limited experience, when sets of state-action pairs have equivalent reward and transition dynamics. Modern reinforcement learning systems, on the other hand, must painstakingly learn through trial and error that sets of state-action pairs are equivalent in value, requiring an often prohibitively large number of samples from their environment. MDP homomorphisms have been proposed that reduce the MDP of the observed environment to an abstract MDP, enabling more sample-efficient policy learning. Consequently, impressive gains in sample efficiency have been achieved when a suitable MDP homomorphism can be constructed a priori, typically by exploiting a practitioner's knowledge of environment symmetries. We propose a novel approach to constructing homomorphisms in discrete action spaces, which uses a partial model of environment dynamics to infer which state-action pairs lead to the same state, reducing the size of the state-action space by a factor equal to the cardinality of the action space. We call this method equivalent effect abstraction. In gridworld environments, we empirically demonstrate that equivalent effect abstraction improves the sample efficiency of model-free methods and the planning efficiency of a model-based method. Furthermore, on Cartpole we show that our method outperforms existing approaches for learning homomorphisms, while using 33x less training data.
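A toy tabular sketch of the idea follows: a dynamics model (here fully known and deterministic, unlike the partial model the method actually assumes) groups state-action pairs by the state they lead to, so each group shares a single value entry. The corridor environment and names are illustrative, not the paper's implementation.

```python
# Toy illustration of equivalent effect abstraction on a 1-D corridor.
from collections import defaultdict

def next_state(state: int, action: str) -> int:
    """Deterministic dynamics on a corridor of 5 cells."""
    step = 1 if action == "right" else -1
    return max(0, min(4, state + step))

# Group (state, action) pairs by the state they lead to: pairs with the same
# effect (and equal reward) can share one value estimate.
groups = defaultdict(list)
for s in range(5):
    for a in ("left", "right"):
        groups[next_state(s, a)].append((s, a))

q_abstract = {landing: 0.0 for landing in groups}  # one entry per group
n_pairs = sum(len(v) for v in groups.values())
print(f"{n_pairs} state-action pairs -> {len(q_abstract)} value entries")
```

With two actions, the effective state-action space shrinks by roughly the cardinality of the action space, as the abstract describes.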
This paper presents resilient navigation and planning algorithms for high-speed autonomous racing in the Indy Autonomous Challenge (IAC). The IAC is a competition with full-scale autonomous race cars that drive at speeds of up to 290 km/h (180 mph). Due to the race car's high-speed vibration, the GPS/INS system is easily degraded, and these degraded GPS measurements can cause severe localization errors leading to serious crash accidents. To this end, we propose a robust navigation system that implements a multi-sensor fusion Kalman filter. In this study, we present how to determine measurement degradation based on a probabilistic approach; on this basis, we can compute the optimal measurements for the correction step of the Kalman filter. In addition, we present a further resilient navigation system so that the race car can follow the track under fatal localization failure conditions. This paper also covers an optimal path planning algorithm for obstacle avoidance. To take into account the original optimal racing line, obstacles, and vehicle dynamics, we propose a road-based path planning algorithm that ensures our race car drives within bounded conditions. In experiments, we evaluate how our designed localization system handles degraded data and, at times, prevents serious crash accidents during high-speed driving. We also describe how we successfully completed the obstacle avoidance challenge.
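A minimal sketch of the gated correction step described above: each measurement's innovation is tested with a Mahalanobis (chi-square) check against the predicted innovation covariance, and measurements judged degraded are rejected before the Kalman update. The matrices and the gate threshold below are illustrative, not the team's tuned values.

```python
# Illustrative Kalman correction with probabilistic measurement gating.
import numpy as np

def gated_update(x, P, z, H, R, gate=9.21):  # ~chi-square(2 dof) at 99%
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    d2 = float(y @ np.linalg.inv(S) @ y) # squared Mahalanobis distance
    if d2 > gate:                        # measurement judged degraded: skip it
        return x, P, False
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, True

# Example with a 4-D state and a 2-D GPS position measurement (made-up values).
x0, P0 = np.zeros(4), np.eye(4)
H, R = np.eye(2, 4), 0.5 * np.eye(2)
x1, P1, accepted = gated_update(x0, P0, z=np.array([0.3, -0.2]), H=H, R=R)
print(accepted, x1)
```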
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
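BIG-bench tasks pair inputs with target outputs; as a hedged illustration (not the benchmark's official API or task schema), a minimal exact-match evaluation loop might look like the following, with `model` standing in for any text-completion callable.

```python
# Illustrative exact-match scoring over a BIG-bench-style task (hypothetical schema).
task = {
    "examples": [
        {"input": "2 + 2 =", "target": "4"},
        {"input": "The capital of France is", "target": "Paris"},
    ]
}

def exact_match(model, task) -> float:
    """Fraction of examples whose completion exactly matches the target."""
    hits = sum(model(ex["input"]).strip() == ex["target"]
               for ex in task["examples"])
    return hits / len(task["examples"])

# A trivial stand-in model that always answers "4": scores 0.5 on this task.
print(exact_match(lambda prompt: "4", task))
```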
The idea of using deep autoencoders to encode seismic waveform features and then using them in different seismological applications is appealing. In this paper, we design tests to evaluate this idea of using autoencoders as feature extractors for different seismological applications, such as event discrimination (i.e., earthquake vs. noise waveforms, earthquake vs. explosion waveforms) and phase picking. These tests involve training an autoencoder, either undercomplete or overcomplete, on a large number of seismic waveforms, and then using the trained encoder as a feature extractor with subsequent application layers (either fully connected layers, or convolutional layers plus fully connected layers) to make the decision. By comparing the performance of these newly designed models against baseline models trained from scratch, we conclude that the autoencoder feature extractor approach can perform well under certain conditions, such as when the target problem requires features similar to those encoded by the autoencoder, when there is a relatively small amount of training data, and when certain model structures and training strategies are used. The model structure that works best across all these tests is an overcomplete autoencoder with convolutional plus fully connected application layers for the estimation.
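A hedged Keras sketch of the tested recipe: pretrain a convolutional autoencoder on raw waveform windows, then freeze the encoder and reuse it under a small classification head (e.g., earthquake vs. noise). The layer sizes and the 512-sample window are illustrative assumptions, not the paper's architecture.

```python
# Illustrative autoencoder-as-feature-extractor pipeline for waveforms.
from tensorflow.keras import layers, models

inp = layers.Input(shape=(512, 1))                       # one waveform window
z = layers.Conv1D(32, 7, padding="same", activation="relu")(inp)
z = layers.MaxPooling1D(2)(z)
code = layers.Conv1D(64, 7, padding="same", activation="relu")(z)  # overcomplete code
d = layers.UpSampling1D(2)(code)
out = layers.Conv1D(1, 7, padding="same")(d)

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(waveforms, waveforms, ...)             # unsupervised pretraining

encoder = models.Model(inp, code)
encoder.trainable = False                                # freeze learned features
h = layers.GlobalAveragePooling1D()(encoder(inp))
pred = layers.Dense(1, activation="sigmoid")(h)          # e.g., earthquake vs. noise
classifier = models.Model(inp, pred)
classifier.compile(optimizer="adam", loss="binary_crossentropy")
# classifier.fit(labeled_waveforms, labels, ...)         # supervised application
```

The baseline comparison in the paper would correspond to training the same classifier architecture end-to-end from scratch instead of reusing the frozen encoder.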
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if: (1) it violates correct specifications or (2) it maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
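To make the two semantic rules concrete, the sketch below encodes them over plain string sets standing in for inferred likely invariants. It is an illustration in Python (INVALIDATOR itself targets Java programs in Defects4J), and the function and variable names are hypothetical.

```python
# Hypothetical encoding of INVALIDATOR's two overfitting conditions.
def is_overfitting(patched_invs: set,
                   correct_specs: set,
                   error_behaviors: set) -> bool:
    """A patch overfits if it (1) violates a correct specification or
    (2) preserves an erroneous behavior of the original buggy program."""
    violates_correct = bool(correct_specs - patched_invs)  # required invariant lost
    keeps_errors = bool(error_behaviors & patched_invs)    # buggy invariant survives
    return violates_correct or keeps_errors

# The patched program lost the spec "y != null": flagged as overfitting.
print(is_overfitting(patched_invs={"x >= 0"},
                     correct_specs={"x >= 0", "y != null"},
                     error_behaviors=set()))  # True
```

When neither condition fires, the approach falls back to the syntax-based classifier trained on labeled patches.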
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downstream neurons of the network. Each of the downstream neurons uses its copy of this signal as one of many dendritic inputs, integrates them all, and fires an output if above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upstream neuron, meaning that in practice the same activation is shared by all the downstream neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation in the output of the upstream neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filterings before the linear combination. We implement this new model into a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit in fully connected and convolutional layers and estimate their FLOPs and weight changes. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements of up to 1.73% over standard ResNets. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
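A hedged sketch of such a unit for a dense layer (not the authors' released Keras code): each dendrite, i.e., each input-output connection, applies its own ReLU to its weighted signal before the linear combination, instead of one shared activation in the upstream neuron.

```python
# Illustrative "active dendrites" dense layer: per-connection ReLU before summation.
import tensorflow as tf

class DendriticDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.w = self.add_weight(shape=(d, self.units),
                                 initializer="glorot_uniform")
        self.b = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, x):
        # Per-dendrite signals: (batch, d, units). Each weighted input is
        # filtered independently by ReLU, then linearly combined.
        signals = tf.nn.relu(x[..., tf.newaxis] * self.w)
        return tf.reduce_sum(signals, axis=-2) + self.b

layer = DendriticDense(16)
print(layer(tf.random.normal([4, 8])).shape)  # (4, 16)
```

Compared with a standard `Dense` followed by ReLU, the nonlinearity here acts on each of the d x units dendritic signals separately, which is what gives the unit its extra representational power.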
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
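As a toy instance of modeling the human's learning as a dynamical system, the sketch below evolves a scalar internal-model parameter (say, an estimate of the robot's inertia) after each observation. The saturating tanh update and the learning rate are illustrative assumptions, not the paper's learned dynamics.

```python
# Toy nonlinear learning dynamic for a scalar human internal-model parameter.
import numpy as np

def human_update(theta: float, observed: float,
                 learning_rate: float = 0.5) -> float:
    """One learning step: move toward the observation, with large errors
    corrected less than proportionally (a saturating, nonlinear update)."""
    return theta + learning_rate * np.tanh(observed - theta)

theta = 0.2                       # human initially underestimates the inertia
for _ in range(10):               # repeated observations of robot behavior
    theta = human_update(theta, observed=1.0)
print(round(theta, 3))            # estimate converges toward reality (1.0)
```

In the paper's framing, the robot would fit such dynamics from demonstrations and then embed them in its planner to choose actions that steer the human's model toward reality.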
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task "features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, and thereby their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
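The similarity-query idea maps naturally onto a triplet-style objective: embeddings of behaviors the user labels similar are pulled together, and dissimilar ones pushed apart. The sketch below is a hedged illustration; the two-layer embedding, the 10-D behavior summaries, and the margin are assumptions, not the paper's setup.

```python
# Illustrative triplet loss over user-labeled behavior similarities.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)
margin = 1.0

def similarity_loss(anchor, positive, negative):
    """Pull user-similar behavior pairs together, push dissimilar apart."""
    d_pos = (embed(anchor) - embed(positive)).norm(dim=-1)
    d_neg = (embed(anchor) - embed(negative)).norm(dim=-1)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# One training step on a made-up batch of user-labeled triplets.
a, p, n = (torch.randn(16, 10) for _ in range(3))
loss = similarity_loss(a, p, n)
opt.zero_grad(); loss.backward(); opt.step()
```

The key difference from standard contrastive learning is where the triplets come from: here the positives and negatives are defined by users' similarity judgments rather than by data augmentation heuristics.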